This microbook is a summary/original review based on the book: A Short History of Nearly Everything
Available for: reading online, reading in our mobile apps for iPhone/Android, and sending in PDF/EPUB/MOBI to Amazon Kindle.
ISBN: 0767908171
Publisher: Broadway Books
In case the title itself isn’t a giveaway, “A Short History of Nearly Everything” is a book about “how we went from there being nothing at all to there being something, and then how a little of that something turned into us, and also what happened in between and since.” That’s a lot of ground to cover, so get ready for the intellectual odyssey of a lifetime!
There are more protons in the dot of an “i” than there are seconds in half a million years. That’s how infinitesimal protons are. And yet, if you want to understand how our universe began, you’ll need to shrink one of those protons down to a billionth of its normal size. Next, you’ll need to gather up everything there is in our universe and squeeze it into that tiniest of spots. Scientists nowadays refer to this spot as a singularity.
The initial singularity was so compact it practically had no dimensions. Yet, it contained within itself all the energy and space-time of the known universe. One not-so-very special day about 13.7 billion years ago, for some mysterious reason, the singularity, figuratively speaking, just exploded. This was the Big Bang, the moment when everything began, including space and time.
Then, in a single fraction of a moment, gravity and all the other forces that govern physics were produced. The singularity inflated, undergoing “a sudden, dramatic expansion” and assuming “heavenly dimensions, space beyond conception.” In less than a minute, that single point of nothingness grew to be a million billion miles across and continued to double in size every 10⁻³⁴ seconds. Within the first three minutes – the time it takes you to make a single sandwich – 98% of all the matter there is, or will ever be, was produced.
The universe has continued expanding ever since, though at a much slower pace. It has also continued evolving, creating hundreds of billions of galaxies. We don’t really know why. But we do know that, as Bryson writes, “if the universe had formed just a tiny bit differently – if gravity were fractionally stronger or weaker, if the expansion had proceeded just a little more slowly or swiftly – then there might never have been stable elements to make you and me and the ground we stand on.” So, things turned out pretty well for us. In fact, they turned out perfectly – in the very literal sense of that word.
The universe is ungraspably vast. To put things into perspective, imagine climbing aboard a rocket ship that travels at the speed of light, the maximum speed at which all energy and matter can travel in our universe. It would take you seven hours just to get to Pluto, out at the edge of our solar system. At its center, our solar system hosts the sun, an average-sized star which is just one of the hundreds of billions of stars in our galaxy, the Milky Way.
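Curious where the seven-hour figure comes from? Here is a minimal back-of-the-envelope sketch in Python; the distance is an illustrative value for Pluto near its farthest point from the sun, not a number taken from the book:

```python
# Light-travel time to Pluto (illustrative figures, not from the book).
SPEED_OF_LIGHT_MPS = 186_282   # miles per second
PLUTO_DISTANCE_MILES = 4.6e9   # roughly Pluto's farthest distance from the sun

travel_seconds = PLUTO_DISTANCE_MILES / SPEED_OF_LIGHT_MPS
print(f"Light-travel time: {travel_seconds / 3600:.1f} hours")  # ~6.9 hours
```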
The Milky Way, in turn, is merely one of the 140 billion or so galaxies in the universe, many of them much larger than ours. That’s such a big number that, to use Bryson’s analogy, “if galaxies were frozen peas, it would be enough to fill a large auditorium.” Now, what are the chances that our galaxy, our solar system, and, finally, our planet, the Earth, are so special and privileged that they are the only ones to harbor any kind of life in the entire universe?
Statistically, incredibly slim. The probability that there is life – and even other thinking beings – out there is very high. In the 1960s, a Cornell professor named Frank Drake calculated that, even with the most conservative estimates, there should be millions of advanced civilizations in the Milky Way alone. For better or for worse, though, that doesn’t mean we have company in any practical sense. Once again, the universe is so vast that even if there are aliens, it’s highly unlikely that we’ll ever be able to contact them.
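Drake’s reasoning can be written down as a simple product of factors, now known as the Drake equation. Here is a minimal Python sketch; every parameter value below is an illustrative assumption, not Drake’s own figure:

```python
# The Drake equation: a chain of multiplied estimates, not a precise law.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of detectable civilizations in the Milky Way."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

estimate = drake(
    R_star=10,   # new stars formed in the galaxy per year (assumption)
    f_p=0.5,     # fraction of stars with planets (assumption)
    n_e=2,       # habitable planets per such star (assumption)
    f_l=0.5,     # fraction of those where life arises (assumption)
    f_i=0.1,     # fraction of those that evolve intelligence (assumption)
    f_c=0.1,     # fraction that develop detectable technology (assumption)
    L=10_000,    # years a civilization stays detectable (assumption)
)
print(f"Estimated civilizations: {estimate:,.0f}")  # 500 with these inputs
```

Tweak the inputs and the answer swings by orders of magnitude – which is exactly why estimates range from “millions” to “almost none.”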
Think about it this way. The average distance between stars in the Milky Way is about 30 trillion miles, or more than five light-years. Even at very high speeds, these are “fantastically challenging distances for any traveling individual.” Moreover, there’s a lot of empty space out there. And we do mean a lot. As astronomer Carl Sagan once worked out, “if we were randomly inserted into the universe, the chances that you would be on or near a planet would be less than one in a billion trillion trillion.” (That’s a one followed by 33 zeros.) Worlds, to quote Sagan’s conclusion, are rare and precious. They are also almost destined to be lonely.
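To translate the book’s figure into astronomers’ preferred unit (a light-year is about 5.88 trillion miles, a standard approximation):

```python
# Converting the average interstellar distance into light-years.
MILES_PER_LIGHT_YEAR = 5.88e12   # standard approximation
star_separation_miles = 30e12    # the book's ~30 trillion miles

print(f"{star_separation_miles / MILES_PER_LIGHT_YEAR:.1f} light-years")  # ~5.1
```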
About 4.6 billion years ago, a great swirl of gas and dust accumulated in space and began to aggregate. 99.9% of it went to make a star. Goaded by its gravity, the remaining 0.1% of floating dust grains continued bumping and colliding with each other, until they differentiated into eight large clumps and about 200 smaller clusters. The star, of course, was the sun; the eight large clumps, the planets of our solar system; and the 200 smaller clusters, their natural satellites, the moons.
In the beginning, there was nothing special about our planet. But over the course of the next 500 million years, frequent comet and meteorite visits brought water and all the components necessary for the successful formation of life. Scientists have identified more than 20 “particularly helpful breaks” that made the Earth a planet suitable for life. Chief among them was the early atmosphere.
The Earth started forming an atmosphere – composed mostly of carbon dioxide, nitrogen, methane, and sulfur – when it was only about a third of its current size. Life formed both thanks to and out of this “noxious stew.” Carbon dioxide, for one, is a powerful greenhouse gas, meaning it traps the sun’s heat and warms the planet. Today that’s a problem, but four billion years ago it was a blessing. The sun was dimmer back then, and if it hadn’t been for the greenhouse effect, the Earth might have frozen over permanently. Thankfully, it didn’t.
In 1953, Harold Urey and Stanley Miller tried replicating the early-Earth atmosphere in a glass flask. Then, to simulate lightning, they passed electrical discharges through a solution of water, methane, ammonia and hydrogen sulphide. After a few days, the water in the flask turned green and yellow. When they analyzed it, they were stunned to discover that the substances had reacted to create amino acids, sugars and other organic compounds. “If God didn’t do it this way,” observed Urey at the time, “he missed a good bet.”
The truth is we still don’t know precisely how life began, but chances are Urey and Miller couldn’t have been far off. About four billion years ago, “some little bag of chemicals fidgeted to life” in one of the Earth’s oceans and then “cleaved itself and produced an heir.” It has been moving, dividing and evolving ever since. Everything that has ever lived dates back to this remarkable event. By way of analogy with the Big Bang, biologists refer to it as the Big Birth.
The process kickstarted by the Big Birth eventually created bacteria, which were the only forms of life on Earth for two billion years. At some point during this period, cyanobacteria, or blue-green algae, learned to absorb water molecules and sup on the hydrogen inside them, while releasing oxygen as waste. In so doing, these ancient cyanobacteria invented photosynthesis, “undoubtedly the most important single metabolic innovation in the history of life on the planet,” in the words of biologists Lynn Margulis and Dorion Sagan.
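In simplified overall form – a standard textbook summary, not a formula from the book – the reaction those cyanobacteria pioneered is: 6 CO₂ + 6 H₂O + light energy → C₆H₁₂O₆ + 6 O₂. Carbon dioxide and water in; sugar and free oxygen out.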
Oxygen was toxic for some organisms but tolerable for others. So, some organisms failed to adapt and died, while others started using oxygen to produce energy more efficiently. One day, “an adventuresome bacterium” invaded another bacterium, and it turned out that the symbiosis suited them both. The captive bacterium eventually became a mitochondrion, an organelle highly specialized in manipulating oxygen in a way that liberates energy from foodstuffs. This mitochondrial invasion produced the first eukaryotic, or complex, cell. Eventually, single-celled eukaryotes started joining together into multicellular beings. In time, “big, complicated, visible entities” like us became possible.
Cells are, however, very small – even the most complex ones. The human body, for example, is essentially a country of cells – some 37 trillion of them, by recent estimates – each devoted in its own specific way to your overall well-being. Hardly any of them can be seen without the aid of a microscope. English polymath Robert Hooke used one in 1665 to observe and describe a cell for the first time in history.
The discovery prompted scientists to think about the origin and progress of life, but it was not until 1844 that another brilliant Englishman, by the name of Charles Darwin, found enough evidence to trace our ancestry back to “a protoplasmal primordial atomic globule,” as Gilbert and Sullivan’s comic opera “The Mikado” puts it. Fearing religious controversy, Darwin made his notes public only a decade and a half later, in late November 1859. “There is grandeur in this view of life,” he wrote back then; “from so simple a beginning, endless forms most beautiful and most wonderful have been, and are being, evolved.”
Three years before Darwin published “On the Origin of Species,” about 800 miles away, in a tranquil corner of Middle Europe, a retiring Augustinian friar named Gregor Mendel started conducting pea plant experiments. He published the results of these experiments in 1866, but nobody really took note of them. In fact, in the following 35 years, Mendel’s paper was cited fewer than five times. As a result, it never reached Darwin, who died in April 1882, less than two years before Mendel did. Too bad, since the paper established many of the rules of heredity and unwittingly demonstrated the inner workings of natural selection.
What Mendel discovered was that the traits of edible peas are controlled by invisible “factors” (as he called them) which combine from both parents in ways that can be predicted with satisfactory accuracy. Moreover, he discovered that these factors retain their integrity after fertilization and are passed on to the next generations, even if their effects are not manifested in the current one. We now call these factors “genes.” We also consider the once-ignored Mendel the founder of the discipline that studies them: genetics.
Thanks to scientists such as the eccentric British-Indian polymath J. B. S. Haldane, Mendel’s ideas on heredity were united with Darwinian principles of evolution in a joint mathematical framework, named the Modern Synthesis by English evolutionary biologist Julian Huxley in 1942. Just 11 years later, James Watson and Francis Crick identified the double-stranded molecular structure of DNA, the final piece of the evolutionary puzzle. They published their findings in the April 25, 1953 edition of “Nature,” in a 900-word article that is now considered a watershed moment in the history of biology and in our understanding of life.
However, the article was largely ignored by the mainstream media at the time, which was more interested in Sir Edmund Hillary’s conquest of Mount Everest and the coronation of Queen Elizabeth II. It took about 25 years for Crick and Watson’s model of DNA to go from plausible to virtually certain.
Darwin’s theory of evolution has been deservedly called “the single best idea that anyone has ever had.” It explained impossible complexity in a very simple way. Crick and Watson’s identification of the structure of DNA made the explanation even simpler, in all of its complexity. The two were well aware of this: as soon as they devised the model, they hurried to their local pub and announced to everybody that they had discovered “the secret of life.” Modern science concurs.
However, many things happened before and in between to make all of this possible. For example, Watson and Crick got their idea for the double-helix structure of DNA from looking at the now-famous “Photo 51,” an X-ray diffraction image of crystallized DNA produced in May 1952 in the lab of English chemist Rosalind Franklin. The two got all the glory and a Nobel Prize; Franklin, instead, got ovarian cancer and died in 1958. Just 24 years before that, chronic overexposure to radiation claimed the life of another brilliant female scientist, Marie Curie, the only person to win the Nobel Prize in two scientific fields, physics and chemistry.
Thanks, in large part, to Curie’s discoveries, an American physical chemist named Willard Libby was able to develop radiocarbon dating in the years following World War II. The process revolutionized paleontology and archaeology by allowing scientists to get accurate readings of the age of bones and other organic remains. Around the time when Hooke described the first cell, James Ussher, an Archbishop of the Church of Ireland, rummaged through references in the Old Testament to conclude that the Earth had been created at midday on October 23, 4004 B.C.
While few took Ussher seriously even back in the 17th century, before radiocarbon dating “the oldest reliable dates went back no further than the First Dynasty in Egypt from about 3000 B.C.” Thanks to Libby, paleontologists and archaeologists could afterward estimate, with a satisfactory degree of confidence, when the last ice age ended. Dating far older events, such as the disappearance of the dinosaurs, required the same principle applied to other radioactive elements: since these decay at measurable rates, measuring how much of them remains in a rock or a fossil makes it possible to calculate the age of almost anything. One scientist, an American geochemist named Clair Patterson, perfected these dating methods and, in 1956, using a meteorite, calculated an age for the Earth of 4.55 billion years. Suffice it to say that Ussher was off. Way off.
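The underlying arithmetic is the standard exponential-decay law. Here is a minimal Python sketch using carbon-14’s well-established half-life of about 5,730 years; the 25%-remaining sample is a made-up example:

```python
import math

# Age of a sample from the fraction of a radioactive isotope remaining,
# using the exponential-decay law: N(t) = N0 * (1/2) ** (t / half_life).
def age_from_fraction(remaining_fraction, half_life_years):
    return half_life_years * math.log(1 / remaining_fraction) / math.log(2)

# A bone retaining 25% of its original carbon-14 is two half-lives old:
print(f"{age_from_fraction(0.25, 5730):,.0f} years")  # ~11,460
```

For rocks and meteorites like Patterson’s, the same law is applied to much slower-decaying elements, such as uranium turning into lead.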
“If I have seen further,” wrote Isaac Newton in 1675 in a letter to the aforementioned Robert Hooke, “it is by standing on the shoulders of giants.” Newton was a giant himself, one of the most towering in history. Just 12 years after he made this claim, he published “Mathematical Principles of Natural Philosophy,” simultaneously one of the most inaccessible and influential books ever written. In it, Newton laid the foundations of classical mechanics by formulating the universal laws of motion and mathematically explaining gravity. Nearly two centuries after his death, a seemingly ordinary patent examiner at the Swiss Patent Office in Bern put a picture of Newton on his study wall. Later, he developed a groundbreaking theory that superseded some of Newton’s laws. He saw further. His name was Albert Einstein.
In a paper published on September 26, 1905, Einstein introduced the world to his special theory of relativity, which rests on two postulates: that the laws of physics are the same for all non-accelerating observers, and that the speed of light in a vacuum is identical for all observers, regardless of their motion. However, for the speed of light to remain constant, some other things, such as distance and time, had to become relative. So, what Einstein suggested in 1905 was something quite bizarre: that time passes at different rates for different observers, depending on their relative motion.
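The time-dilation effect follows from what physicists call the Lorentz factor. A minimal sketch, with the 90%-of-light-speed example chosen purely for illustration:

```python
import math

# Lorentz factor: gamma = 1 / sqrt(1 - (v/c)^2). A clock moving at
# velocity v runs slow by this factor relative to a stationary observer.
def lorentz_factor(v_over_c):
    return 1 / math.sqrt(1 - v_over_c ** 2)

# At 90% of light speed, one year on board corresponds to about 2.3
# years for an observer at rest:
print(f"gamma at 0.9c: {lorentz_factor(0.9):.2f}")  # ~2.29
```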
He didn’t stop there. Shortly afterward, in a paper submitted almost as an afterthought to the first one, Einstein showed that mass and energy are interchangeable, and that energy equals mass times the speed of light squared. While others tried to wrap their minds around these strange findings and their even stranger consequences, Einstein spent the next decade trying to incorporate gravity into his relativistic framework. The result was the theory of general relativity, which in 1915 refined Newton’s law of universal gravitation by showing that gravity is nothing more than a warping or distortion of the space-time continuum.
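The famous formula itself is simple enough to compute directly; the one-gram example below is illustrative, not from the book:

```python
# Mass-energy equivalence: E = m * c^2, in SI units.
SPEED_OF_LIGHT_MS = 299_792_458   # metres per second

def rest_energy_joules(mass_kg):
    return mass_kg * SPEED_OF_LIGHT_MS ** 2

print(f"{rest_energy_joules(0.001):.2e} joules")  # ~9e13 J in a single gram
```

Nine times ten to the thirteenth joules – equivalent to roughly 21 kilotons of TNT, about the yield of the Nagasaki bomb, locked inside a single gram of matter.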
Around the same time as Einstein was transforming our understanding of the universe, a group of scientists were trying to transform his. As early as 1900, a 42-year-old Berlin theoretical physicist named Max Planck unveiled the “quantum theory,” which suggested that energy doesn’t flow like water, but rather comes in individualized packets, which he termed quanta. As strange and unfathomable as that sounded at the turn of the century, brilliant intellectuals such as Niels Bohr, Werner Heisenberg, Wolfgang Pauli and Erwin Schrödinger would work hard over the next few decades to reach even weirder and less comprehensible conclusions.
Some of them – such as the notion that matter could pop into existence from nothing, or the idea that electrons are particles which can also be described in terms of waves – seemed worthy of regard to Einstein. However, the idea that subatomic particles can “telepathically” influence each other at a distance really annoyed him, not least because it seemed a stark violation of his special theory of relativity. “It seems hard to sneak a look at God’s cards,” he once said. “But that He plays dice and uses ‘telepathic’ methods […] is something that I cannot believe for a single moment.”
The quantum physicists never succeeded in changing Einstein’s mind. But they did manage to change the landscape of physics, permanently. Nowadays we know for a fact, as Richard Feynman once put it, that “things on a small scale behave nothing like things on a large scale.” The “telepathic” behavior of subatomic particles is nowadays referred to as “quantum entanglement” and has been demonstrated experimentally. Moreover, similar findings in quantum physics have led to many counterintuitive theories about our universe. Some of them question the Big Bang; others question almost everything we know about the universe – or, at least, everything we think we know.
For example, to make sense of the subatomic world, string theory suggests that tiny particles such as quarks are actually strings – that is to say, “vibrating strands of energy that oscillate in eleven dimensions,” seven of which may be unknowable to us. But this is where, at least for the time being, everything becomes too weird for the general reader to grasp. “Modern science, especially physics, is replete with outlandish ideas that defy common sense and intuition,” wrote English physicist Paul Davies in 2001. Could it be because our universe also defies common sense and intuition? Or is it because our intuition and common sense never evolved to understand it?
Colossally clever and immensely intelligible, “A Short History of Nearly Everything” is a marvel of a book. There may be no other like it. A must-read for anyone interested in the science of how the universe works.
Cherish your life, your family, your friends and the entire planet. In the grand scheme of things, all of them are rarities. The very fact they exist is a statistical miracle.
Bill Bryson is an American-British author of books covering a variety of topics – travel, the English language, science, and many others. He has worked for The Times and The Independent. His bestselling books include “A Walk in the Woods.”